 boundary distance


Spatial Uncertainty Quantification in Wildfire Forecasting for Climate-Resilient Emergency Planning

Chakravarty, Aditya

arXiv.org Artificial Intelligence

Climate change is intensifying wildfire risks globally, making reliable forecasting critical for adaptation strategies. While machine learning shows promise for wildfire prediction from Earth observation data, current approaches lack the uncertainty quantification essential for risk-aware decision making. We present the first systematic analysis of spatial uncertainty in wildfire spread forecasting using multimodal Earth observation inputs. We demonstrate that predictive uncertainty exhibits coherent spatial structure concentrated near fire perimeters. Our novel distance metric reveals that high-uncertainty regions form consistent 20-60 meter buffer zones around predicted firelines, a result directly applicable to emergency planning. Feature attribution identifies vegetation health and fire activity as the primary uncertainty drivers. This work enables more robust wildfire management systems supporting communities adapting to increasing fire risk under climate change.
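A minimal sketch of how such a perimeter-distance analysis could be computed from a fire-probability map and a pixel-wise uncertainty map is given below; the thresholds, pixel size, and use of a Euclidean distance transform are illustrative assumptions, not the paper's exact metric.

import numpy as np
from scipy import ndimage

def uncertainty_to_fireline_distance(fire_prob, uncertainty,
                                     fire_thresh=0.5, unc_quantile=0.9,
                                     pixel_size_m=10.0):
    """Distances (in metres) from high-uncertainty pixels to the predicted fireline.

    Toy sketch: thresholds, pixel size, and the distance transform are assumptions.
    """
    fire_mask = fire_prob >= fire_thresh
    # The perimeter is the set of fire pixels touching at least one non-fire pixel.
    perimeter = fire_mask & ~ndimage.binary_erosion(fire_mask)
    # Distance of every pixel to its nearest perimeter pixel.
    dist_px = ndimage.distance_transform_edt(~perimeter)
    # Keep only the most uncertain pixels (top decile here).
    high_unc = uncertainty >= np.quantile(uncertainty, unc_quantile)
    return dist_px[high_unc] * pixel_size_m

Distances clustering within a few pixels of the perimeter would correspond to the 20-60 meter buffer zones described in the abstract.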


RegD: Hierarchical Embeddings via Distances over Geometric Regions

Yang, Hui, Chen, Jiaoyan

arXiv.org Artificial Intelligence

Hierarchical data are common in many domains like life sciences and e-commerce, and their embeddings often play a critical role. Although hyperbolic embeddings offer a principled approach to representing hierarchical structures in low-dimensional spaces, their utility is hindered by optimization difficulties in hyperbolic space and a dependence on handcrafted structural constraints. We propose RegD, a novel Euclidean framework that addresses these limitations by representing hierarchical data as geometric regions with two new metrics: (1) depth distance, which preserves the representational power of hyperbolic spaces for hierarchical data, and (2) boundary distance, which explicitly encodes set-inclusion relationships between regions in a general way. Our empirical evaluation on diverse real-world datasets shows consistent performance gains over state-of-the-art methods and demonstrates RegD's potential for broader applications beyond hierarchy tasks alone.
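As a toy illustration of region-based distances, the sketch below embeds each concept as a Euclidean ball (center, radius); the specific formulas are illustrative stand-ins for the two kinds of metric, not RegD's actual depth and boundary distances.

import numpy as np

def boundary_distance(c_parent, r_parent, c_child, r_child):
    # Signed containment gap between two balls: the value is <= 0 exactly when the
    # child ball lies entirely inside the parent ball, so set inclusion (and hence
    # the hierarchy) is encoded as a signed distance. Illustrative formula only.
    return np.linalg.norm(c_parent - c_child) + r_child - r_parent

def depth_distance(c1, r1, c2, r2):
    # A toy proxy for hierarchical depth: broader (ancestor) concepts get larger
    # radii, deeper ones smaller, so radius ratios stand in for depth differences.
    return np.linalg.norm(c1 - c2) + abs(np.log(r1 / r2))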


Hot-Distance: Combining One-Hot and Signed Distance Embeddings for Segmentation

Zouinkhi, Marwan, Rhoades, Jeff L., Weigel, Aubrey V.

arXiv.org Artificial Intelligence

Data is the lifeblood of machine learning. It is crucial for practitioners to maximize the accuracy and diversity of the data supplied to a model during training in order to achieve a reliable and generalizable final product. The target a model is trained to predict determines what data can be used. For instance, a model designed to produce segmentations may be trained to generate one-hot encoded or signed boundary distance predictions [Heinrich et al., 2021]. A network trained to predict the presence of a given structure as a binary mask can be accurately trained on datasets indicating the object's presence, as well as on data sparsely representing its absence (see section 2.1). However, this is not the case for signed boundary distance predictions (see section 2.2). As a result, datasets labeling the presence of structures mutually exclusive to the target objects, but not the targets themselves, cannot be used to train a signed boundary distance prediction model. We introduce hot-distance to incorporate the benefits of signed boundary distance prediction while maintaining the ability to train models on a larger body of datasets. An empirical study of this strategy's effects, compared to existing methods (i.e.
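A minimal sketch of deriving both prediction targets from a single binary mask follows, assuming a 2D boolean mask and a truncated signed Euclidean distance; it illustrates the pairing of one-hot and signed boundary distance targets rather than the authors' implementation.

import numpy as np
from scipy.ndimage import distance_transform_edt

def hot_distance_targets(mask, clip=20.0):
    """Return (one_hot, signed_distance) training targets for a boolean mask.

    Sketch only: the truncation value and sign convention are assumptions.
    """
    mask = np.asarray(mask, dtype=bool)
    one_hot = mask.astype(np.float32)
    # Positive distances inside the object, negative outside, clipped for stability.
    signed = distance_transform_edt(mask) - distance_transform_edt(~mask)
    signed = np.clip(signed, -clip, clip).astype(np.float32)
    return one_hot, signed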


Computational Asymmetries in Robust Classification

Marro, Samuele, Lombardi, Michele

arXiv.org Artificial Intelligence

In the context of adversarial robustness, we make three strongly related contributions. First, we prove that while attacking ReLU classifiers is $\mathit{NP}$-hard, ensuring their robustness at training time is $\Sigma_2^P$-hard (even on a single example). This asymmetry provides a rationale for the fact that robust classification approaches are frequently fooled in the literature. Second, we show that inference-time robustness certificates are not affected by this asymmetry by introducing a proof-of-concept approach named Counter-Attack (CA). Indeed, CA displays a reversed asymmetry: running the defense is $\mathit{NP}$-hard, while attacking it is $\Sigma_2^P$-hard. Finally, motivated by our previous result, we argue that adversarial attacks can be used in the context of robustness certification, and we provide an empirical evaluation of their effectiveness. As a byproduct of this process, we also release UG100, a benchmark dataset for adversarial attacks.
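The sketch below illustrates the general idea of using an attack as an inference-time robustness check; `attack` is a hypothetical callable that searches for an adversarial example within an L-infinity ball, and the actual Counter-Attack procedure and its guarantees are those described in the paper.

def robust_at_inference(model, attack, x, eps):
    """Return (label, certified): certified=True means the attack found no label
    change within radius eps, i.e. an empirical, attack-dependent certificate.
    attack(model, x, eps) is a hypothetical interface returning an adversarial
    example or None."""
    y = model(x)
    x_adv = attack(model, x, eps)
    if x_adv is None:
        return y, True
    return y, model(x_adv) == y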


Membership inference attack with relative decision boundary distance

Xu, JiaCheng, Tan, ChengXiang

arXiv.org Artificial Intelligence

Membership inference attack is one of the most popular privacy attacks in machine learning; it aims to predict whether a given sample was contained in the target model's training set. The label-only membership inference attack is a variant that exploits sample robustness and has attracted more attention since it assumes a practical scenario in which the adversary only has access to the predicted labels of the input samples. However, since the decision boundary distance, which measures robustness, is strongly affected by the random initial image, the adversary may get opposite results even for the same input samples. In this paper, we propose a new attack method, called multi-class adaptive membership inference attack, in the label-only setting. The decision boundary distances for all target classes are traversed in the early attack iterations, and subsequent iterations continue with the shortest decision boundary distance to obtain a stable and optimal estimate. Instead of using a single boundary distance, the relative boundary distance between a sample and its neighboring points is also employed as a new membership score to distinguish member samples inside the training set from nonmember samples outside it. Experiments show that previous label-only membership inference attacks using the untargeted HopSkipJump algorithm fail to achieve the optimal decision boundary in more than half of the samples, whereas our multi-targeted HopSkipJump algorithm succeeds in almost all samples. In addition, extensive experiments show that our multi-class adaptive MIA outperforms current label-only membership inference attacks on the CIFAR10 and CIFAR100 datasets, especially on the true-positive rate at low false-positive rates.
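A minimal sketch of the scoring idea (shortest targeted boundary distance over all classes, normalized by the distances of perturbed neighbors) follows; boundary_distance is a hypothetical stand-in for a targeted HopSkipJump distance estimate, and the neighbor count and noise scale are assumptions.

import numpy as np

def membership_score(model, boundary_distance, x, num_classes,
                     n_neighbors=5, sigma=0.05):
    # Shortest decision boundary distance over all target classes (multi-targeted).
    d_x = min(boundary_distance(model, x, c) for c in range(num_classes))
    # Boundary distances of random neighbors of x, used as a per-sample baseline.
    neighbor_ds = [
        min(boundary_distance(model, x + sigma * np.random.randn(*x.shape), c)
            for c in range(num_classes))
        for _ in range(n_neighbors)
    ]
    # Relative boundary distance: training members tend to sit farther from the
    # boundary than their neighbors, so larger scores suggest membership.
    return d_x / (np.mean(neighbor_ds) + 1e-12)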


Evading Black-box Classifiers Without Breaking Eggs

Debenedetti, Edoardo, Carlini, Nicholas, Tramèr, Florian

arXiv.org Artificial Intelligence

Decision-based evasion attacks repeatedly query a black-box classifier to generate adversarial examples. Prior work measures the cost of such attacks by the total number of queries made to the classifier. We argue this metric is flawed. Most security-critical machine learning systems aim to weed out "bad" data (e.g., malware or harmful content). Queries to such systems carry a fundamentally asymmetric cost: queries detected as "bad" come at a higher cost because they trigger additional security filters, e.g., usage throttling or account suspension. Yet, we find that existing decision-based attacks issue a large number of "bad" queries, which likely renders them ineffective against security-critical systems. We then design new attacks that reduce the number of bad queries by $1.5$-$7.3\times$, but often at a significant increase in total (non-bad) queries. We thus pose it as an open problem to build black-box attacks that are more effective under realistic cost metrics.
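The asymmetric cost this abstract argues for can be made concrete with a sketch like the one below, where the relative weight of a flagged query is an illustrative assumption:

def attack_cost(n_good_queries, n_bad_queries, bad_query_weight=10.0):
    """Total attack cost when each flagged ("bad") query is bad_query_weight times
    as expensive as a benign one. The weight is an assumption for illustration."""
    return n_good_queries + bad_query_weight * n_bad_queries

# Under this metric, 5000 good + 500 bad queries (cost 10000) is more expensive
# than 8000 good + 100 bad queries (cost 9000), even though the total query count
# is lower in the first case.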


Boosting Adversarial Attacks by Leveraging Decision Boundary Information

Zeng, Boheng, Gao, LianLi, Zhang, QiLong, Li, ChaoQun, Song, JingKuan, Jing, ShuaiQi

arXiv.org Artificial Intelligence

Due to the gap between a substitute model and a victim model, gradient-based noise generated from a substitute model may transfer poorly to the victim model since their gradients differ. Inspired by the fact that the decision boundaries of different models do not differ much, we conduct experiments and discover that the gradients of different models are more similar on the decision boundary than at the original position. Moreover, since the decision boundary in the vicinity of an input image is flat along most directions, we conjecture that boundary gradients can help find an effective direction to cross the decision boundary of the victim models. Based on this, we propose a Boundary Fitting Attack to improve transferability. Specifically, we introduce a method to obtain a set of boundary points and leverage the gradient information of these points to update the adversarial examples. Notably, our method can be combined with existing gradient-based methods. Extensive experiments prove the effectiveness of our method, improving the success rate by 5.6% against normally trained CNNs and by 14.9% against defended CNNs on average, compared to state-of-the-art transfer-based attacks. Furthermore, we compare transformers with CNNs; the results indicate that transformers are more robust than CNNs. Nevertheless, our method still outperforms existing methods when attacking transformers: when using CNNs as substitute models, it obtains an average attack success rate of 58.2%, which is 10.8% higher than other state-of-the-art transfer-based attacks.
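A minimal sketch of attacking from boundary points is shown below: a boundary point is located by bisection between the clean image and a current adversarial example, and its gradient drives the next update. The interfaces (predict_label, loss_gradient), the single-boundary-point simplification, and the step sizes are assumptions for illustration, not the paper's algorithm.

import numpy as np

def find_boundary_point(predict_label, x_clean, x_adv, steps=20):
    """Bisect between a clean and an adversarial point to land near the boundary."""
    y_clean = predict_label(x_clean)
    lo, hi = x_clean, x_adv
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if predict_label(mid) == y_clean:
            lo = mid          # still on the clean side
        else:
            hi = mid          # already across the boundary
    return 0.5 * (lo + hi)

def boundary_fitting_step(predict_label, loss_gradient, x_clean, x_adv, y_true,
                          eps=8 / 255, alpha=2 / 255):
    x_b = find_boundary_point(predict_label, x_clean, x_adv)
    g = loss_gradient(x_b, y_true)                # gradient taken at the boundary point
    x_new = x_adv + alpha * np.sign(g)            # FGSM-style step along that gradient
    return np.clip(x_new, x_clean - eps, x_clean + eps)  # stay in the L-inf ball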


Generating Minimal Adversarial Perturbations with Integrated Adaptive Gradients

Xiao, Yatie, Pun, Chi-Man

arXiv.org Machine Learning

We focus on the problem of generating gradient-based adversarial perturbations in the image classification domain: substantial pixel perturbations change the features deep neural networks learn from clean images and fool deep neural models into making incorrect predictions. However, large-scale pixel modification makes the changes visible even when the attack succeeds. To find optimal perturbations that directly quantify the boundary distance between clean images and adversarial examples in latent space, we propose a novel method for integrated adversarial perturbation generation, which formulates adversarial perturbations at the adaptive and integrated gradient level. Our approach requires only a few adaptive gradient operations to seek the decision boundary between original images and their corresponding adversarial examples. We compare our method for crafting adversarial perturbations with other state-of-the-art gradient-based attack methods. Experimental results suggest that adversarial samples generated by our approach are highly effective at fooling deep neural classification networks with lower pixel modification and show good transferability across image classification models.
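A rough sketch of steering a small perturbation along an integrated-gradient direction with momentum-style adaptation follows; the baseline, path length, and update rule are illustrative assumptions rather than the authors' formulation, and loss_gradient is a hypothetical oracle for the classifier's input gradient.

import numpy as np

def integrated_gradient_direction(loss_gradient, x, y, baseline=None, steps=16):
    # Average the input gradients along the straight path from a baseline to x.
    baseline = np.zeros_like(x) if baseline is None else baseline
    grads = [loss_gradient(baseline + (k / steps) * (x - baseline), y)
             for k in range(1, steps + 1)]
    return (x - baseline) * np.mean(grads, axis=0)

def adaptive_step(loss_gradient, x_adv, y, momentum, step=1 / 255, decay=0.9):
    # Accumulate the integrated-gradient direction and take a small signed step.
    g = integrated_gradient_direction(loss_gradient, x_adv, y)
    momentum = decay * momentum + g
    return x_adv + step * np.sign(momentum), momentum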